Current Issue: October–December | Volume: 2023 | Issue Number: 4 | Articles: 5
More than 60 years have passed since the first robot was installed in an industrial context. Since then, industrial robotics has advanced greatly, and today robots can collaborate with humans on a wide range of working activities. Nevertheless, the impact of robots on human operators has not been deeply investigated. To address this gap, we conducted an empirical study measuring the errors made by two groups of people performing a working task through a virtual reality (VR) device. A sample of 78 engineering students participated in the experiments. The first group worked with a robot sharing the same workplace, while the second group worked without a robot present. The number of errors made by the participants was collected and analyzed. Although the statistical results show no significant differences between the two groups, qualitative analysis indicates that the presence of the robot led people to pay more attention during the execution of the task, but gave them a worse learning experience....
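The group comparison described above amounts to a two-sample test on error counts. The abstract does not name the exact test, so the following sketch uses Welch's t-test, and the error counts are hypothetical; both are illustrative assumptions, not the paper's analysis.

```python
import math

def welch_t(a, b):
    """Welch's t-statistic and degrees of freedom for two independent samples
    with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical error counts: group working alongside the robot vs. without it.
with_robot = [2, 1, 3, 0, 2, 1, 2, 3, 1, 2]
without_robot = [2, 3, 1, 2, 4, 2, 3, 1, 2, 3]
t, df = welch_t(with_robot, without_robot)
print(round(t, 3), round(df, 1))
```

A |t| small relative to the critical value at the computed df would correspond to the "no significant difference" finding reported above.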
The article presents an overview of several studies in the field of brain–computer interfaces (BCIs), the requirements for the architecture of such promising devices, and a multimodal BCI for drone control in a smart-city environment. Distinctive features of the proposed solution are the simplicity of the architecture (a single smartphone both receives and processes bio-signals from the headset and transmits commands to the drone), an open-source software solution for processing signals and for generating and sending commands to the unmanned aerial vehicle (UAV), and the multimodality of the BCI (it uses both electroencephalographic (EEG) and electrooculographic (EOG) signals of the operator). For bio-signal acquisition, we used the NeuroSky Mindwave Mobile 2 headset, connected to an Android-based smartphone via Bluetooth. The developed Android application (Tello NeuroSky) processes signals from the headset and generates and transmits commands to the DJI Tello UAV via Wi-Fi. The decrease (depression) and increase of the α- and β-rhythms of the brain, as well as the EOG signals that occur during blinking, serve as triggers for UAV commands. The developed software allows manual setting of the minimum, maximum, and threshold values for the processed bio-signals. The following UAV commands were implemented: take-off, landing, forward movement, and backward movement. To increase performance, the software uses two threads of the smartphone's central processing unit (CPU): one for signal processing (a 1-D Daubechies 2 (db2) wavelet transform) and updating the diagrams, and one for generating and transmitting commands to the drone....
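The trigger logic described above (rhythm depression/increase plus blink events mapped to take-off, landing, forward, and backward) can be sketched as a simple threshold classifier. The function name, the band-power inputs, and the threshold values below are hypothetical illustrations; the actual app derives its features from a 1-D db2 wavelet decomposition and exposes the thresholds as user-adjustable settings.

```python
def classify_sample(alpha_power, beta_power, blink, cfg):
    """Map one processed bio-signal sample to a drone command (or None).

    cfg holds illustrative user-adjustable threshold values, mirroring the
    manual min/max/threshold configuration described for the app.
    """
    if blink:  # EOG blink event: toggle between take-off and landing
        return "takeoff" if not cfg["airborne"] else "land"
    if beta_power >= cfg["beta_threshold"]:  # β-rhythm increase -> forward
        return "forward"
    if alpha_power <= cfg["alpha_min"]:      # α-rhythm depression -> back
        return "back"
    return None  # no trigger fired

cfg = {"airborne": False, "beta_threshold": 0.7, "alpha_min": 0.2}
print(classify_sample(0.5, 0.9, False, cfg))
```

In the real system this decision would run on one CPU thread while a second thread transmits the resulting command to the drone over Wi-Fi.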
Maintaining stable and reliable working conditions is of vital importance for many companies, especially those operating heavy machinery. Because of human exhaustion, as well as unpredicted hazards and dangerous situations, personnel have to act and plan each move carefully. This paper presents a human–computer interaction (HCI)-based system that uses a concentration-level measurement function to increase the safety of machine and equipment operators. The system was developed in response to the results of user experience (UX) analyses of the state of occupational safety, which indicate that the most common cause of accidents is so-called insufficient concentration while performing work. The paper presents the motivation for addressing this issue and describes the proposed electroencephalography (EEG)-based solution in the form of a concentration measurement system concept. We discuss in-field measurements with a prototype of this solution, together with an analysis of the obtained results. The implementation of the wireless communication interface is also described, along with a visualization application....
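A monitoring loop of the kind proposed here could flag an operator whose concentration index drops too low. The sketch below uses hysteresis so the alert does not flicker around a single threshold; the 0–100 index scale and the threshold values are assumptions for illustration, not the paper's calibration.

```python
def alert_states(index_stream, low=40, high=60):
    """Return one alert flag per sample, with hysteresis:
    an alert starts when the concentration index drops below `low`
    and clears only once the index rises back above `high`."""
    alerting = False
    out = []
    for x in index_stream:
        if alerting and x > high:
            alerting = False        # operator has recovered focus
        elif not alerting and x < low:
            alerting = True         # concentration dropped: raise alert
        out.append(alerting)
    return out

# Hypothetical stream of 0-100 concentration-index values
print(alert_states([80, 55, 35, 45, 65, 70]))
```

The gap between `low` and `high` controls how decisively the index must recover before the visualization application would clear the warning.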
Multimodal data fusion of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) has developed into an important neuroimaging research field, as it circumvents the inherent limitations of the individual modalities by combining their complementary information. This study employed an optimization-based feature-selection algorithm to systematically investigate the complementary nature of multimodal fused features. After preprocessing the acquired data of both modalities (i.e., EEG and fNIRS), temporal statistical features were computed separately over 10 s intervals for each modality. The computed features were fused to create a training vector. A wrapper-based binary enhanced whale optimization algorithm (E-WOA) was used to select the optimal fused feature subset using a support-vector-machine-based cost function. An online dataset of 29 healthy individuals was used to evaluate the performance of the proposed methodology. The findings suggest that the proposed approach enhances classification performance by evaluating the degree of complementarity between features and selecting the most efficient fused subset. The binary E-WOA feature-selection approach achieved a high classification rate (94.22 ± 5.39%), a 3.85% improvement over the conventional whale optimization algorithm. The proposed hybrid classification framework outperformed both the individual modalities and traditional feature-selection classification (p < 0.01). These findings indicate the potential efficacy of the proposed framework for several neuroclinical applications....
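The wrapper idea above can be illustrated in miniature: candidate binary feature masks are scored by training a classifier on the selected features, and the best-scoring mask wins. To stay self-contained, this sketch substitutes random search for the binary E-WOA and a leave-one-out nearest-centroid classifier for the SVM cost function; both substitutions, and the toy data, are assumptions for illustration.

```python
import random

def centroid_accuracy(X, y, mask):
    """Leave-one-out accuracy of a nearest-centroid classifier on the
    features selected by the binary mask (stand-in for the SVM cost)."""
    idx = [j for j, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    correct = 0
    for i in range(len(X)):
        cents = {}
        for c in set(y):  # per-class centroid, excluding the held-out sample
            rows = [X[k] for k in range(len(X)) if y[k] == c and k != i]
            cents[c] = [sum(r[j] for r in rows) / len(rows) for j in idx]
        xi = [X[i][j] for j in idx]
        pred = min(cents, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(xi, cents[c])))
        correct += pred == y[i]
    return correct / len(X)

def wrapper_select(X, y, n_iter=200, seed=0):
    """Random-search wrapper (simplified stand-in for binary E-WOA):
    fitness = accuracy, ties broken by fewer selected features."""
    rng = random.Random(seed)
    d = len(X[0])
    best_mask, best_fit = None, (-1.0, 0)
    for _ in range(n_iter):
        mask = [rng.random() < 0.5 for _ in range(d)]
        fit = (centroid_accuracy(X, y, mask), -sum(mask))
        if fit > best_fit:
            best_fit, best_mask = fit, mask
    return best_mask, best_fit[0]

# Toy fused vector: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5], [0.1, 1], [0.2, 9], [1.0, 2], [1.1, 8], [0.9, 0]]
y = [0, 0, 0, 1, 1, 1]
mask, acc = wrapper_select(X, y)
print(mask, acc)
```

The wrapper correctly keeps the informative feature and discards the noisy one, which is the same mechanism by which the paper's E-WOA isolates the complementary EEG/fNIRS features.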
In recent years, with the widespread popularity of mobile devices, gesture recognition as a form of human–computer interaction has received increasing attention. However, existing gesture recognition methods have limitations, such as requiring additional hardware, invading user privacy, and making data collection difficult. To address these issues, we propose SonicGest, a recognition system that uses acoustic signals to sense in-air gestures. The system needs only the built-in speaker and microphone of a smartphone, with no additional hardware and no privacy disclosure. SonicGest transforms the acoustic Doppler shifts caused by gesture movements into a spectrogram, applies spectrogram enhancement techniques to remove noise interference, and then builds a convolutional neural network (CNN) classification model to recognize different gestures. To mitigate the difficulty of data collection, we use the Wasserstein distance with a gradient penalty to optimize the loss function of a generative adversarial network (GAN), generating high-quality spectrograms to expand the dataset. Experimental results show that SonicGest achieves a recognition accuracy of 98.9% on ten kinds of gestures....
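The Doppler-to-spectrogram step can be illustrated with a toy short-time transform: a moving hand shifts the frequency of the reflected tone, which shows up as the spectral peak moving between frames. The frame size, hop, carrier frequencies, and naive DFT below are assumptions chosen to keep the sketch self-contained, not SonicGest's actual parameters (real ultrasonic sensing uses near-inaudible carriers around 18–22 kHz).

```python
import cmath, math

def spectrogram(signal, frame=64, hop=32):
    """Magnitude spectrogram via a naive DFT over Hann-windowed frames
    (a minimal stand-in for the short-time transform applied to the
    microphone signal before CNN classification)."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1))
           for n in range(frame)]
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        seg = [signal[start + n] * win[n] for n in range(frame)]
        frames.append([
            abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame)
                    for n in range(frame)))
            for k in range(frame // 2)  # non-negative frequencies only
        ])
    return frames

# Simulate a Doppler-like upward shift: the tone jumps from 1000 Hz to
# 1100 Hz halfway through (illustrative values).
fs = 8000
sig = [math.sin(2 * math.pi * (1000 if i < 1024 else 1100) * i / fs)
       for i in range(2048)]
S = spectrogram(sig)
peak_first = max(range(len(S[0])), key=S[0].__getitem__)
peak_last = max(range(len(S[-1])), key=S[-1].__getitem__)
print(peak_first, peak_last)  # bin width is fs/frame = 125 Hz
```

The peak bin rising between the first and last frames is exactly the kind of time–frequency pattern the CNN learns to associate with each gesture.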